    Subliminal versus supraliminal stimuli activate neural responses in anterior cingulate cortex, fusiform gyrus and insula: a meta-analysis of fMRI studies

    Background: Non-conscious neural activation may underlie various psychological functions in health and disorder. However, the neural substrates of non-conscious processing have not been entirely elucidated. Examining the differential effects of arousing stimuli that are consciously versus unconsciously perceived will improve our knowledge of the neural circuitry involved in non-conscious perception. Here we conduct preliminary analyses of neural activation in studies that have used both subliminal and supraliminal presentation of the same stimulus. Methods: We use Activation Likelihood Estimation (ALE) to examine functional Magnetic Resonance Imaging (fMRI) studies that presented the same stimuli both subliminally and supraliminally to healthy participants during scanning. We included a total of 193 foci from 9 studies representing subliminal stimulation and 315 foci from 10 studies representing supraliminal stimulation. Results: The anterior cingulate cortex is significantly activated during both subliminal and supraliminal stimulus presentation. Subliminal stimuli are linked to significantly increased activation in the right fusiform gyrus and right insula. Supraliminal stimuli show significantly increased activation in the left rostral anterior cingulate. Conclusions: Non-conscious processing of arousing stimuli may involve primary visual areas and may also recruit the insula, a brain area involved in eventual interoceptive awareness. The anterior cingulate is perhaps a key brain region for the integration of conscious and non-conscious processing. These preliminary data provide candidate brain regions for further study into the neural correlates of conscious experience.
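
    The core of an ALE analysis can be summarised in a few lines: each reported focus is modelled as a 3D Gaussian, each experiment contributes a modelled-activation (MA) map, and the ALE value at a voxel is the union of the MA values across experiments. The sketch below is a minimal, illustrative version of that computation only; the grid size, kernel width, and example foci are hypothetical, and the published meta-analysis used the standard ALE implementation with sample-size-dependent kernels and permutation-based significance testing.

```python
import numpy as np

SHAPE = (20, 20, 20)       # toy voxel grid (illustrative, not MNI space)
FWHM_VOX = 4.0             # illustrative kernel width in voxels
SIGMA = FWHM_VOX / 2.355   # convert FWHM to Gaussian sigma

def modeled_activation(foci, shape=SHAPE, sigma=SIGMA):
    """Per-experiment MA map: voxel-wise maximum over (unnormalised)
    Gaussian kernels centred on that experiment's foci."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                indexing="ij"), axis=-1)
    ma = np.zeros(shape)
    for focus in foci:
        d2 = np.sum((grid - np.asarray(focus)) ** 2, axis=-1)
        ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
    return ma

def ale_map(experiments):
    """Voxel-wise union of MA maps across experiments: 1 - prod(1 - MA_i)."""
    ale = np.zeros(SHAPE)
    for foci in experiments:
        ale = 1.0 - (1.0 - ale) * (1.0 - modeled_activation(foci))
    return ale

# Hypothetical foci (voxel coordinates) from two "subliminal" experiments.
subliminal_experiments = [[(5, 5, 5), (6, 7, 5)], [(12, 10, 9)]]
print(ale_map(subliminal_experiments).max())
```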

    How Bodies and Voices Interact in Early Emotion Perception

    Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously. Yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli; this was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15-25 Hz), primarily reflecting biological motion perception, was modulated 200-400 ms after the vocalization. Larger differences in suppression between audiovisual and auditory stimuli under high compared to low noise were found for emotional, but not for neutral, stimuli. This observation is in accordance with the inverse effectiveness principle and suggests a modulation of integration by emotional content. Overall, the results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.
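
    As a rough illustration of the inverse-effectiveness logic tested above, the sketch below compares the audiovisual gain on the N100 (audio-only minus audiovisual amplitude, the N100 being a negative deflection) between high- and low-noise conditions with a paired t-test. All values are simulated stand-ins; the names, numbers, and the simple peak-amplitude measure are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 24  # hypothetical sample size

# Hypothetical per-subject N100 peak amplitudes (microvolts, negative-going).
aud_low  = rng.normal(-6.0, 1.0, n_subjects)
av_low   = aud_low + rng.normal(1.0, 0.5, n_subjects)   # modest AV reduction
aud_high = rng.normal(-4.0, 1.0, n_subjects)
av_high  = aud_high + rng.normal(2.0, 0.5, n_subjects)  # larger AV reduction

# AV "gain" = reduction of the negative N100 in the audiovisual condition.
gain_low  = av_low - aud_low
gain_high = av_high - aud_high

# Inverse effectiveness predicts a larger gain when the auditory signal is
# degraded (high noise) than when it is clear (low noise).
t, p = stats.ttest_rel(gain_high, gain_low)
print(f"AV gain, high vs low noise: t({n_subjects - 1}) = {t:.2f}, p = {p:.3g}")
```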

    Audiovisual Non-Verbal Dynamic Faces Elicit Converging fMRI and ERP Responses

    In an everyday social interaction we automatically integrate another's facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input, a phenomenon previously well studied with human audiovisual speech but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity while participants viewed and listened to an animated female face producing non-verbal human vocalizations (i.e., coughing, sneezing) under audio-only (AUD), visual-only (VIS) and audiovisual (AV) stimulus conditions, alternating with Rest (R). Underadditive effects (AV < AUD + VIS) occurred in regions dominant for sensory processing, which showed AV activation greater than that of the dominant modality alone. Right posterior temporal and parietal regions showed an AV-maximum response, in which AV activation was greater than for either modality alone but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed common activation, in which AV activation was the same as in one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no Rest), mid-range underadditive effects for the auditory N140 and the face-sensitive N170, and late AV-maximum and common-activation effects. Based on the convergence between fMRI and ERP data, we propose a mechanism whereby a multisensory stimulus may be signaled as early as 60 ms and facilitated in sensory-specific regions by increased processing speed (at N170) and efficiency (decreased amplitude in auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion.
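
    The response profiles named above (superadditive, underadditive, AV maximum, common activation) reduce to simple comparisons between condition-wise activation estimates. The helper below is a hedged sketch of those comparisons using hypothetical beta values relative to Rest; the tolerance and the way sensory-dominant regions are grouped are illustrative choices, not the paper's exact criteria.

```python
def classify_av_profile(aud, vis, av, tol=0.05):
    """Label an AV response relative to the two unisensory responses.
    `aud`, `vis`, `av` are activation estimates (e.g. betas vs. Rest)."""
    dominant = max(aud, vis)
    if av > aud + vis + tol:
        return "superadditive (AV > AUD + VIS)"
    if dominant + tol < av < aud + vis - tol:
        # Greater than the strongest unisensory response but below the sum:
        # covers the 'AV maximum' and underadditive patterns described above.
        return "AV maximum / underadditive (max(AUD, VIS) < AV < AUD + VIS)"
    if abs(av - dominant) <= tol:
        return "common activation (AV ~= strongest unisensory response)"
    return "other (AV below the strongest unisensory response)"

# Hypothetical regional estimates:
print(classify_av_profile(aud=1.0, vis=0.2, av=1.02))  # common activation
print(classify_av_profile(aud=0.9, vis=0.7, av=1.2))   # AV maximum / underadditive
print(classify_av_profile(aud=0.4, vis=0.4, av=1.0))   # superadditive
```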

    Emotional Cues during Simultaneous Face and Voice Processing: Electrophysiological Insights

    Both facial expression and tone of voice represent key signals of emotional communication, but their brain processing correlates remain unclear. Accordingly, we constructed a novel implicit emotion recognition task consisting of simultaneously presented human faces and voices with neutral, happy, and angry valence, within the context of a task requiring recognition of monkey faces and voices. To investigate the temporal unfolding of the processing of affective information from human face-voice pairings, we recorded event-related potentials (ERPs) to these audiovisual test stimuli in 18 healthy subjects. N100, P200, N250, and P300 components were observed at electrodes in the frontal-central region, while P100, N170, and P270 components were observed at electrodes in the parietal-occipital region. Results indicated a significant audiovisual stimulus effect on the amplitudes and latencies of components in the frontal-central region (P200, P300, and N250) but not in the parietal-occipital region (P100, N170, and P270). Specifically, P200 and P300 amplitudes were more positive for emotional relative to neutral audiovisual stimuli, irrespective of valence, whereas N250 amplitude was more negative for neutral relative to emotional stimuli. No differentiation was observed between the angry and happy conditions. The results suggest that the general effect of emotion on audiovisual processing can emerge as early as 200 ms (P200 peak latency) post stimulus onset, in spite of the implicit affective processing task demands, and that this effect is mainly distributed over the frontal-central region.
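
    The amplitude and latency effects reported above rest on a simple measurement: picking a component's peak within a fixed window of the averaged ERP. Below is a minimal sketch of that step with a synthetic waveform; the window bounds, sampling rate, and data are hypothetical and are only meant to show how a P200-style peak would be extracted before comparing emotional and neutral conditions.

```python
import numpy as np

def peak_in_window(erp, times, t_min, t_max, polarity=+1):
    """Return (amplitude, latency) of the most positive (polarity=+1) or
    most negative (polarity=-1) sample of `erp` between t_min and t_max (s)."""
    mask = (times >= t_min) & (times <= t_max)
    segment = polarity * erp[mask]
    idx = int(np.argmax(segment))
    return polarity * segment[idx], times[mask][idx]

sfreq = 500.0                                 # Hz, hypothetical sampling rate
times = np.arange(-0.1, 0.6, 1.0 / sfreq)     # -100 ms to 600 ms epoch
erp_emotional = 5.0 * np.sin(2 * np.pi * 2.5 * times)  # stand-in waveform (uV)

# P200: most positive point between 150 and 250 ms post stimulus onset.
amp, lat = peak_in_window(erp_emotional, times, 0.15, 0.25, polarity=+1)
print(f"P200 peak: {amp:.2f} uV at {lat * 1000:.0f} ms")
```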

    ENIGMA-Anxiety Working Group: Rationale for and organization of large-scale neuroimaging studies of anxiety disorders

    Other funding: Anxiety Disorders Research Network, European College of Neuropsychopharmacology; Claude Leon Postdoctoral Fellowship; Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, 44541416-TRR58); EU 7th Framework Marie Curie Actions International Staff Exchange Scheme grant 'European and South African Research Network in Anxiety Disorders' (EUSARNAD); Geestkracht programme of the Netherlands Organization for Health Research and Development (ZonMw, 10-000-1002); Intramural Research Training Award (IRTA) program within the National Institute of Mental Health under the Intramural Research Program (NIMH-IRP, MH002781); National Institute of Mental Health under the Intramural Research Program (NIMH-IRP, ZIA-MH-002782); SA Medical Research Council; U.S. National Institutes of Health grants (P01 AG026572, P01 AG055367, P41 EB015922, R01 AG060610, R56 AG058854, RF1 AG051710, U54 EB020403).
    Anxiety disorders are highly prevalent and disabling but seem particularly tractable to investigation with translational neuroscience methodologies. Neuroimaging has informed our understanding of the neurobiology of anxiety disorders, but research has been limited by small sample sizes and low statistical power, as well as heterogeneous imaging methodology. The ENIGMA-Anxiety Working Group has brought together researchers from around the world in a harmonized and coordinated effort to address these challenges and generate more robust and reproducible findings. This paper elaborates on the concepts and methods informing the work of the working group to date, and describes the initial approach of the four subgroups studying generalized anxiety disorder, panic disorder, social anxiety disorder, and specific phobia. At present, the ENIGMA-Anxiety database contains information about more than 100 unique samples from 16 countries and 59 institutes. Future directions include examining additional imaging modalities, integrating imaging and genetic data, and collaborating with other ENIGMA working groups. The ENIGMA consortium creates synergy at the intersection of global mental health and clinical neuroscience, and the ENIGMA-Anxiety Working Group extends the promise of this approach to neuroimaging research on anxiety disorders.

    Association of trait emotional intelligence and individual fMRI-activation patterns during the perception of social signals from voice and face

    Multimodal integration of nonverbal social signals is essential for successful social interaction. Previous studies have implicated the posterior superior temporal sulcus (pSTS) in the perception of social signals such as nonverbal emotional signals, as well as in social cognitive functions like mentalizing/theory of mind. In the present study, we evaluated the relationships between trait emotional intelligence (EI) and fMRI activation patterns in individual subjects during the multimodal perception of nonverbal emotional signals from voice and face. Trait EI was linked to hemodynamic responses in the right pSTS, an area that also exhibits a distinct sensitivity to human voices and faces. In all other regions known to subserve the perceptual audiovisual integration of human social signals (i.e., amygdala, fusiform gyrus, thalamus), no such linked responses were observed. This functional difference within the network for the audiovisual perception of human social signals indicates a specific contribution of the pSTS as a possible interface between the perception of social information and social cognition.
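
    At its core, the brain-behaviour link reported above is a per-subject association between a trait score and an activation estimate extracted from a region of interest. The sketch below shows that association as a Pearson correlation on hypothetical numbers; the variable names, sample size, and simulated relationship are assumptions for illustration, not the study's data or its full analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 24  # hypothetical sample size

# Hypothetical trait EI scores and right-pSTS activation estimates (betas).
trait_ei = rng.normal(100.0, 15.0, n_subjects)
psts_beta = 0.02 * (trait_ei - 100.0) + rng.normal(0.0, 0.3, n_subjects)

r, p = stats.pearsonr(trait_ei, psts_beta)
print(f"trait EI vs right pSTS response: r = {r:.2f}, p = {p:.3g}")
```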